The ever-growing family of deep learning technologies is making revolutionary changes to modern life. However, conventional computing architectures are designed to process sequential, digital programs and are heavily burdened by massively parallel and adaptive deep learning workloads. Photonic integrated circuits provide an efficient approach to mitigating the bandwidth limitations and the power wall of their electronic counterparts, showing great potential for ultrafast and energy-efficient high-performance computing. Here, we propose an optical computing architecture enabled by on-chip diffraction to implement convolutional acceleration, termed the optical convolution unit (OCU). We demonstrate that any real-valued convolution kernel can be realized by the OCU, with a prominent boost in computational throughput via the concept of structural re-parameterization. With the OCU as the fundamental unit, we build an optical convolutional neural network (oCNN) to implement two popular deep learning tasks: classification and regression. For classification, the Fashion-MNIST and CIFAR-4 datasets are tested, with accuracies of 91.63% and 86.25%, respectively. For regression, we build an optical denoising convolutional neural network (oDnCNN) to handle Gaussian noise in grayscale images with noise levels σ = 10, 15, and 20, resulting in clean images with average PSNRs of 31.70 dB, 29.39 dB, and 27.72 dB, respectively. The proposed OCU offers low energy consumption and high information density owing to its fully passive nature and compact footprint, providing a highly parallel yet lightweight solution for future computing architectures that must handle high-dimensional tensors in deep learning.
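As a toy illustration of how a real-valued kernel can be carried by non-negative physical weights (a standard decomposition in optical computing, offered here as my own sketch rather than the paper's OCU implementation), the signed kernel is split into positive and negative parts whose convolution outputs are subtracted; for brevity, the input itself is left signed rather than encoded:

```python
# Illustrative sketch (not the paper's implementation): a signed kernel k is
# split into non-negative parts k+ and k-, since a passive optical element can
# only realize non-negative weights; the signed result is recovered as
# (x * k+) - (x * k-). Input encoding is ignored here for brevity.

def conv1d(x, k):
    """Valid-mode 1-D correlation of sequence x with kernel k."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n)) for i in range(len(x) - n + 1)]

def split_kernel(k):
    """Decompose a real-valued kernel into two non-negative kernels."""
    k_pos = [max(w, 0.0) for w in k]
    k_neg = [max(-w, 0.0) for w in k]
    return k_pos, k_neg

x = [1.0, 2.0, 0.5, -1.0, 3.0]
k = [0.5, -1.0, 0.25]
k_pos, k_neg = split_kernel(k)
direct = conv1d(x, k)
optical = [p - n for p, n in zip(conv1d(x, k_pos), conv1d(x, k_neg))]
assert all(abs(a - b) < 1e-9 for a, b in zip(direct, optical))
```

Splitting the kernel doubles the number of passive convolutions but leaves the result exactly equal to the signed convolution.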
3D point clouds are rich in geometric structure information, while 2D images contain important and continuous texture information. Combining 2D information to achieve better 3D semantic segmentation has become mainstream in 3D scene understanding. Despite this success, it remains elusive how to fuse and process cross-dimensional features from these two distinct spaces. Existing state-of-the-art methods usually exploit bidirectional projection to align the cross-dimensional features and tackle both the 2D and 3D semantic segmentation tasks. However, to enable bidirectional mapping, this framework often requires a symmetric 2D-3D network structure, which limits the network's flexibility. Meanwhile, such a dual-task setting can easily distract the network and lead to over-fitting on the 3D segmentation task. Constrained by the network's inflexibility, fused features can only pass through a decoder network, which hurts model performance due to insufficient depth. To alleviate these drawbacks, we argue in this paper that, despite its simplicity, projecting multi-view 2D deep semantic features unidirectionally into the 3D space and aligning them with 3D deep semantic features leads to better feature fusion. On the one hand, the unidirectional projection makes our model focus more on the core task, i.e., 3D segmentation; on the other hand, relaxing the bidirectional projection to a unidirectional one enables deeper cross-domain semantic alignment and offers the flexibility to fuse better, more complicated features from very different spaces. Among joint 2D-3D approaches, our proposed method achieves superior performance on the ScanNetV2 benchmark for 3D semantic segmentation.
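The unidirectional fusion step can be sketched in a few lines (the pinhole camera, feature dimensions, and toy feature map below are illustrative assumptions, not the paper's network): each 3D point is projected into a view, the 2D feature at the hit pixel is fetched, and the two features are simply concatenated, with no symmetric 3D-to-2D branch needed.

```python
# Hedged sketch of unidirectional 2D-to-3D feature projection. The camera
# intrinsics and the dict-based feature map are toy stand-ins.

def project(point, f=100.0, cx=32.0, cy=32.0):
    """Pinhole projection of a camera-space 3D point to integer pixel coordinates."""
    x, y, z = point
    return int(round(f * x / z + cx)), int(round(f * y / z + cy))

def fuse_point(point, feat3d, feat2d_map):
    """Concatenate a point's 3D feature with the 2D feature at its projection."""
    u, v = project(point)
    feat2d = feat2d_map.get((u, v), [0.0] * 2)  # zero fallback outside the view
    return feat3d + feat2d                      # simple concatenation fusion

feat2d_map = {(52, 42): [0.7, 0.3]}             # one populated pixel, 2-dim feature
fused = fuse_point((0.2, 0.1, 1.0), [1.0, 2.0], feat2d_map)
```

Because the mapping only ever goes 2D→3D, the 2D and 3D networks need not mirror each other, which is the flexibility the abstract emphasizes.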
Generating consistent and high-quality images from given texts is essential for visual-language understanding. Although impressive results have been achieved in generating high-quality images, text-image consistency remains a major concern in existing GAN-based methods. In particular, the most popular metric, $R$-precision, may not accurately reflect text-image consistency, often resulting in very misleading semantics in the generated images. Despite its significance, designing a better text-image consistency metric surprisingly remains under-explored in the community. In this paper, we take a step forward and develop a novel CLIP-based metric termed Semantic Similarity Distance ($SSD$), which is both theoretically grounded from a distributional viewpoint and empirically verified on benchmark datasets. Benefiting from the proposed metric, we further design the Parallel Deep Fusion Generative Adversarial Networks (PDF-GAN), which aim to improve text-image consistency by fusing semantic information at different granularities and capturing accurate semantics. Equipped with two novel plug-and-play components, a Hard-Negative Sentence Constructor and Semantic Projection, the proposed PDF-GAN can mitigate inconsistent semantics and bridge the text-image semantic gap. A series of experiments shows that, compared with current state-of-the-art methods, our PDF-GAN achieves significantly better text-image consistency while maintaining decent image quality on the CUB and COCO datasets.
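For intuition only: $SSD$ itself is CLIP-based and derived from a distributional viewpoint, which the abstract does not detail. The sketch below shows merely the generic building block such metrics share, a distance between L2-normalized text and image embeddings, using toy vectors in place of real CLIP outputs.

```python
# Toy embedding-distance sketch, NOT the paper's SSD formulation.
import math

def normalize(v):
    """L2-normalize a vector."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine_distance(u, v):
    """1 - cosine similarity between normalized embeddings (smaller = more consistent)."""
    u, v = normalize(u), normalize(v)
    return 1.0 - sum(a * b for a, b in zip(u, v))

text_emb  = [0.2, 0.9, 0.1]    # stand-in for a CLIP text embedding
img_match = [0.25, 0.85, 0.1]  # embedding of a consistent image
img_other = [0.9, 0.1, 0.3]    # embedding of an inconsistent image
assert cosine_distance(text_emb, img_match) < cosine_distance(text_emb, img_other)
```

A well-behaved consistency metric should rank the matching image closer to the text than a mismatched one, which $R$-precision, per the abstract, often fails to do.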
Instance segmentation on point clouds is crucial for 3D scene understanding. Distance clustering is commonly used in state-of-the-art (SOTA) methods. It is usually effective but performs poorly on adjacent objects that share the same semantic label, especially when they share neighboring points. Due to the uneven distribution of offset points, these existing methods can hardly cluster all instance points. To this end, we design a novel divide-and-conquer strategy and propose an end-to-end network named PBNet, which binarizes each point and clusters them separately to segment instances. PBNet divides offset instance points into two categories, high-density points and low-density points (HPs vs. LPs), and then conquers them separately. Adjacent objects can be clearly separated by removing the LPs, and then completed and refined by assigning the LPs via a neighbor voting method. To further reduce clustering errors, we develop an iterative merging algorithm based on mean instance size to aggregate fragmented instances. Experiments on the ScanNetV2 and S3DIS datasets demonstrate the superiority of our model. In particular, PBNet achieves the best AP50 and AP25 to date on the ScanNetV2 official benchmark challenge (validation set), while also demonstrating high efficiency.
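The divide-and-conquer pipeline can be mimicked on a toy 2-D point set (the density threshold, radii, and greedy clustering below are illustrative stand-ins, not PBNet's actual components): points are binarized by local density, HPs are clustered first, and each LP then joins the majority cluster among its nearest HPs.

```python
# Toy sketch of the binarize-then-conquer idea; all hyper-parameters are
# illustrative assumptions.
import math

def density(points, i, radius=1.5):
    """Number of neighbours within `radius` of point i (a crude density proxy)."""
    return sum(1 for j, q in enumerate(points)
               if j != i and math.dist(points[i], q) <= radius)

def cluster_hps(hps, radius=1.5):
    """Greedy single-linkage clustering of high-density points."""
    labels = [-1] * len(hps)
    cur = 0
    for i in range(len(hps)):
        if labels[i] != -1:
            continue
        labels[i] = cur
        stack = [i]
        while stack:
            a = stack.pop()
            for b in range(len(hps)):
                if labels[b] == -1 and math.dist(hps[a], hps[b]) <= radius:
                    labels[b] = cur
                    stack.append(b)
        cur += 1
    return labels

def assign_lps(lps, hps, hp_labels, k=3):
    """Neighbour voting: each LP takes the majority label of its k nearest HPs."""
    out = []
    for p in lps:
        nearest = sorted(range(len(hps)), key=lambda j: math.dist(p, hps[j]))[:k]
        votes = [hp_labels[j] for j in nearest]
        out.append(max(set(votes), key=votes.count))
    return out

# Two adjacent instances with one sparse frontier point between them.
points = [(0, 0), (1, 0), (0, 1), (5, 0), (6, 0), (5, 1), (2.5, 0)]
dens = [density(points, i) for i in range(len(points))]
hps = [p for p, d in zip(points, dens) if d >= 2]
lps = [p for p, d in zip(points, dens) if d < 2]
hp_labels = cluster_hps(hps)
lp_labels = assign_lps(lps, hps, hp_labels)
```

Removing the sparse frontier point first cleanly separates the two instances; the voting step then attaches it to its correct cluster afterwards.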
Unsupervised cross-modality medical image adaptation aims to alleviate the severe domain gap between different imaging modalities without using target-domain labels. The key to this task lies in aligning the distributions of the source and target domains. A common attempt is to enforce global alignment between the two domains; however, this neglects the fatal problem of locally imbalanced domain gaps, i.e., some local features with larger domain gaps are harder to transfer. Recently, some methods have conducted alignment focused on local regions to improve the efficiency of model learning, but this operation may cause a deficiency of critical contextual information. To tackle this limitation, we propose a novel strategy to alleviate the imbalanced domain gap for medical images, namely global-local union alignment. Specifically, a feature-disentanglement style-transfer module first synthesizes target-like source images to reduce the global domain gap. Then, a local feature mask is integrated to reduce the "gap" of local features by prioritizing those discriminative features with larger domain gaps. This combination of global and local alignment can precisely localize critical regions in the segmentation target while preserving overall semantic consistency. We conduct a series of experiments with two cross-modality adaptation tasks, i.e., cardiac substructure and abdominal multi-organ segmentation. Experimental results indicate that our method achieves state-of-the-art performance on both tasks.
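A toy sketch of combining global and local alignment (the weighting scheme is an assumption for illustration, not the paper's exact objective): per-feature gaps are averaged globally, while a mask keeps only the largest gaps for an extra, prioritized local term.

```python
# Illustrative global + masked-local alignment objective; top_k and lam are
# toy hyper-parameters, not from the paper.

def gaps(src_feats, tgt_feats):
    """Per-feature source/target discrepancy (toy 1-D features)."""
    return [abs(s - t) for s, t in zip(src_feats, tgt_feats)]

def local_mask(gap_vals, top_k=2):
    """Binary mask selecting the top_k features with the largest gaps."""
    order = sorted(range(len(gap_vals)), key=lambda i: -gap_vals[i])
    keep = set(order[:top_k])
    return [1.0 if i in keep else 0.0 for i in range(len(gap_vals))]

def alignment_loss(src_feats, tgt_feats, lam=0.5):
    """Global mean gap plus a weighted local term over the masked features."""
    g = gaps(src_feats, tgt_feats)
    global_term = sum(g) / len(g)
    m = local_mask(g)
    local_term = sum(gi * mi for gi, mi in zip(g, m)) / sum(m)
    return global_term + lam * local_term

loss = alignment_loss([0.9, 0.1, 0.5, 0.2], [0.1, 0.1, 0.4, 0.2])
```

The masked term makes the hard-to-transfer features (largest gaps) dominate the extra penalty, while the global term preserves overall consistency.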
We consider the problem of multi-view 3D face reconstruction (MVR) with weakly supervised learning, which leverages a limited number of 2D face images (e.g., 3) to generate a high-quality 3D face model with very light annotation. Despite their encouraging performance, present MVR methods simply concatenate multi-view image features and pay less attention to critical regions (e.g., the eyes, eyebrows, nose, and mouth). To this end, we propose a novel model named Deep Fusion MVR (DF-MVR) and design a multi-view encoding to single decoding framework with skip connections, able to extract, integrate, and compensate deep features with attention from multiple views. In addition, we develop a multi-view face parsing network to learn, identify, and emphasize critical common face regions. Finally, although our model is trained with a few 2D images, it can reconstruct an accurate 3D model even given only one 2D image as input. We conduct extensive experiments to evaluate various multi-view 3D face reconstruction methods. Experiments on the Pixel-Face and Bosphorus datasets demonstrate the superiority of our model. Without 3D landmark annotations, DF-MVR achieves 5.2% and 3.0% RMSE improvement over the best existing weakly supervised MVR on the Pixel-Face and Bosphorus datasets, respectively; with 3D landmark annotations, DF-MVR performs excellently on the Pixel-Face dataset, achieving a 13.4% RMSE improvement over the best weakly supervised MVR model.
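Multi-view feature fusion with attention can be sketched generically (a plain softmax-weighted sum standing in for DF-MVR's deep fusion; the scores and feature vectors are toy values): each view's feature vector contributes in proportion to its attention score, so informative views dominate instead of being averaged away by plain concatenation.

```python
# Generic attention-weighted multi-view fusion; not DF-MVR's actual network.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def fuse(view_feats, scores):
    """Weighted sum of per-view feature vectors using softmax attention weights."""
    w = softmax(scores)
    dim = len(view_feats[0])
    return [sum(w[v] * view_feats[v][d] for v in range(len(view_feats)))
            for d in range(dim)]

feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # features from 3 face views
fused = fuse(feats, [2.0, 0.0, 0.0])          # first view scored highest
```

Here the highly scored first view dominates the fused vector, which is the qualitative behavior attention-based fusion is meant to provide over naive concatenation.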
While most current image outpainting methods conduct horizontal extrapolation, we study the generalized image outpainting problem, which extrapolates the visual context all around a given image. To this end, we develop a novel transformer-based generative adversarial network called U-Transformer, able to extend image borders with plausible structures and details, even for complicated scenery images. Specifically, we design the generator as an encoder-to-decoder structure embedded with popular Swin Transformer blocks. As a result, our novel framework can better cope with long-range dependencies in images, which is crucial for generalized image outpainting. We additionally propose a U-shaped structure and a multi-view temporal-spatial prediction network to enhance image self-reconstruction as well as unknown-part prediction. We experimentally demonstrate that our proposed method produces visually appealing results for generalized image outpainting compared with state-of-the-art image outpainting methods.
Supervised deep-learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, the requirement for large amounts of high-quality ground-truth data hinders their application, owing to the generalization problem. Recently, Implicit Neural Representation (INR) has emerged as a powerful DL-based tool for solving inverse problems by characterizing the attributes of a signal as a continuous function of the corresponding coordinates in an unsupervised manner. In this work, we propose an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which takes only spatiotemporal coordinates as inputs. Specifically, the proposed INR represents the dynamic MRI images as an implicit function and encodes them into neural networks. The weights of the network are learned solely from the sparsely acquired (k, t)-space data itself, without external training datasets or prior images. Benefiting from the strong implicit continuity regularization of the INR, together with explicit regularization for low-rankness and sparsity, our proposed method outperforms the compared scan-specific methods at various acceleration factors. For example, experiments on retrospective cardiac cine datasets show an improvement of 5.5-7.1 dB in PSNR at extremely high accelerations (up to 41.6-fold). The high quality and inherent continuity of the images provided by the INR have great potential to further improve the spatiotemporal resolution of dynamic MRI, without the need for any training data.
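The core INR idea can be sketched in a few lines (the network size and Fourier-feature encoding below are illustrative assumptions, and training against (k, t)-space data is omitted): a small MLP maps a continuous spatiotemporal coordinate (x, y, t) to an image intensity, so the dynamic image can be queried at arbitrary off-grid points.

```python
# Minimal forward-only INR sketch; architecture and encoding are toy
# assumptions, not the paper's exact network.
import math, random

random.seed(0)  # reproducible toy weights

def encode(coord, n_freq=4):
    """Fourier-feature positional encoding of an (x, y, t) coordinate."""
    feats = []
    for c in coord:
        for k in range(n_freq):
            feats += [math.sin(2**k * math.pi * c), math.cos(2**k * math.pi * c)]
    return feats

def mlp(feats, w1, w2):
    """Two-layer MLP with ReLU; returns a single intensity value."""
    hidden = [max(0.0, sum(f * w for f, w in zip(feats, row))) for row in w1]
    return sum(h * w for h, w in zip(hidden, w2))

in_dim = len(encode((0.0, 0.0, 0.0)))  # 3 coords x 4 freqs x (sin, cos) = 24
w1 = [[random.uniform(-0.1, 0.1) for _ in range(in_dim)] for _ in range(8)]
w2 = [random.uniform(-0.1, 0.1) for _ in range(8)]

# The representation is continuous: grid points and off-grid points alike.
on_grid  = mlp(encode((0.25, 0.50, 0.0)), w1, w2)
off_grid = mlp(encode((0.251, 0.502, 0.013)), w1, w2)
```

In the actual method the weights would be fitted so that the (Fourier-transformed) predictions match the sparsely acquired (k, t)-space samples; here only the continuous-query property is demonstrated.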
Neural Radiance Fields (NeRF) have received wide attention in Sparse-View Computed Tomography (SVCT) reconstruction tasks as a self-supervised deep learning framework. NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn this function by minimizing a loss on the SV sinogram. Benefiting from the continuous representation provided by NeRF, high-quality CT images can be reconstructed. However, existing NeRF-based SVCT methods strictly assume that there is no relative motion during CT acquisition, because they require \textit{accurate} projection poses to model the X-rays that scan the SV sinogram. Consequently, these methods suffer severe performance drops for real SVCT imaging with motion. In this work, we propose a self-calibrating neural field that recovers artifact-free images from a rigid-motion-corrupted SV sinogram without using any external data. Specifically, we parametrize the inaccurate projection poses caused by rigid motion as trainable variables and then jointly optimize these pose variables and the MLP. We conduct numerical experiments on a public CT image dataset. The results indicate that our model significantly outperforms two representative NeRF-based methods on SVCT reconstruction tasks with four different levels of rigid motion.
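The joint pose-and-network optimization can be illustrated with a deliberately tiny analogue (entirely my own construction, not the paper's neural field): a 1-D signal is observed under an unknown rigid shift, and both the model amplitude and the shift are treated as trainable variables fitted by gradient descent, rather than assuming the "pose" is known exactly.

```python
# Toy self-calibration analogue: jointly fit a signal parameter (amplitude a)
# and an unknown rigid "pose" offset d by finite-difference gradient descent.
import math

xs = [i * 0.1 for i in range(60)]
a_true, d_true = 2.0, 0.3
ys = [a_true * math.sin(x + d_true) for x in xs]  # observations with unknown shift

def loss(a, d):
    """Mean squared error between the shifted model and the observations."""
    return sum((a * math.sin(x + d) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

a, d, lr, eps = 1.0, 0.0, 0.05, 1e-5
for _ in range(3000):
    ga = (loss(a + eps, d) - loss(a - eps, d)) / (2 * eps)  # grad w.r.t. amplitude
    gd = (loss(a, d + eps) - loss(a, d - eps)) / (2 * eps)  # grad w.r.t. pose offset
    a, d = a - lr * ga, d - lr * gd
```

Because the shift is a trainable variable rather than a fixed assumption, the fit recovers both the signal and the "pose" from the corrupted observations, which is the essence of the self-calibration strategy.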
In recent years, there has been growing interest in 3D human face modeling due to its wide applications in digital humans, character generation, and animation. Existing approaches overwhelmingly emphasize modeling the exterior shape, texture, and skin properties of faces, while ignoring the inherent correlation between the inner skeletal structure and appearance. In this paper, we present SCULPTOR, 3D face creation with skeletal consistency using a learned parametric face generator, aiming to facilitate the easy creation of anatomically correct and visually convincing face models via a hybrid parametric-morphable representation. At the core of SCULPTOR is LUCY, the first large-scale shape-skeleton face dataset, created in collaboration with plastic surgeons. Named after one of the oldest known human ancestors, our LUCY dataset contains high-quality computed tomography (CT) scans of complete human heads before and after orthognathic surgery, which are critical for evaluating surgical results. LUCY consists of 144 scans of 72 subjects (31 male and 41 female), where each subject has two CT scans taken pre- and post-orthognathic surgery. Based on our LUCY dataset, we learn SCULPTOR, a novel skeleton-consistent parametric face generator that can create unique and nuanced facial features that help define a character while remaining physiologically sound. SCULPTOR jointly models the skull, face geometry, and face appearance under a unified data-driven framework by decomposing a 3D face into shape blend shapes, pose blend shapes, and expression blend shapes. Compared with existing methods, SCULPTOR preserves both anatomical correctness and visual realism in face generation tasks. Finally, we demonstrate the robustness and effectiveness of SCULPTOR in various previously unseen, fancy applications.
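The blend-shape decomposition mentioned at the end is a standard morphable-model construction and can be sketched directly (the toy bases and weights below are mine, not SCULPTOR's learned parameters): a mesh is the template plus weighted per-vertex offsets from each basis; pose blend shapes would add a third basis in exactly the same way.

```python
# Generic linear blend-shape sketch, not SCULPTOR's learned generator.

template = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # 3 toy vertices
shape_basis = [[(0.1, 0.0, 0.0), (0.0, 0.1, 0.0), (0.0, 0.0, 0.1)]]  # 1 shape dir
expr_basis  = [[(0.0, 0.2, 0.0), (0.0, 0.0, 0.0), (0.2, 0.0, 0.0)]]  # 1 expr dir

def blend(template, bases_and_weights):
    """Template mesh plus weighted per-vertex offsets from each blend-shape basis."""
    verts = [list(v) for v in template]
    for basis, weights in bases_and_weights:
        for b_vecs, w in zip(basis, weights):
            for vi, off in enumerate(b_vecs):
                for c in range(3):
                    verts[vi][c] += w * off[c]
    return [tuple(v) for v in verts]

face = blend(template, [(shape_basis, [2.0]), (expr_basis, [0.5])])
```

Each basis contributes linearly, so identity (shape), articulation (pose), and expression can be controlled independently, which is what makes the unified parametric framework tractable.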